The Group-Lasso: ℓ1,∞ Regularization versus ℓ1,2 Regularization

Authors

  • Julia E. Vogt
  • Volker Roth
Abstract

The ℓ1,∞ norm and the ℓ1,2 norm are well-known tools for joint regularization in Group-Lasso methods. While the ℓ1,2 version has been studied in detail, there are still open questions regarding the uniqueness of solutions and the efficiency of algorithms for the ℓ1,∞ variant. For the latter, we characterize the conditions for uniqueness of solutions, present a simple test for uniqueness, and derive a highly efficient active-set algorithm that can deal with input dimensions in the millions. We compare both variants of the Group-Lasso in its two most common application scenarios: obtaining sparsity on the level of groups in “standard” prediction problems, and multi-task learning, where the aim is to solve many learning problems in parallel that are coupled via the Group-Lasso constraint. We show that both versions perform quite similarly in “standard” applications. However, a very clear distinction between the variants emerges in multi-task settings, where the ℓ1,2 version consistently outperforms the ℓ1,∞ counterpart in terms of prediction accuracy.
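As a rough illustration of the two penalties compared here (a minimal NumPy sketch, not the authors' implementation), consider a coefficient matrix B with one row per group, e.g. one feature across several tasks in the multi-task setting:

    import numpy as np

    # Minimal sketch: B has one row per group (e.g. one row per feature,
    # one column per task). The l1,2 penalty sums the Euclidean norm of
    # each row; the l1,inf penalty sums the largest absolute entry of
    # each row. Both couple the coefficients within a group.

    def l12_penalty(B):
        return np.sum(np.linalg.norm(B, axis=1))

    def l1inf_penalty(B):
        return np.sum(np.max(np.abs(B), axis=1))

    rng = np.random.default_rng(0)
    B = rng.standard_normal((5, 3))  # 5 groups, 3 tasks
    print(l12_penalty(B), l1inf_penalty(B))

Note that the ℓ1,∞ penalty only charges for the largest coefficient in each group, so the other entries of an active group can vary freely below that maximum; this is one way to see why the two penalties can behave differently when many tasks share a group.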

Similar articles

Moreau-Yosida Regularization for Grouped Tree Structure Learning

We consider the tree-structured group Lasso, where the structure over the features can be represented as a tree with leaf nodes as features and internal nodes as clusters of the features. The structured regularization with a pre-defined tree structure is based on a group-Lasso penalty, where one group is defined for each node of the tree. Such a regularization can help uncover the structured spa...
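To make that penalty concrete, here is a hedged sketch: one group per tree node, each group consisting of the leaf features under that node, with a weighted ℓ2 norm per group. The tree and the unit weights below are illustrative, not taken from the paper.

    import numpy as np

    # Tree-structured group-Lasso penalty: sum over tree nodes of
    # weight * ||w[group]||_2, where each group collects the leaf
    # features under one node. Groups and weights are illustrative.

    def tree_group_penalty(w, node_groups, weights):
        return sum(wt * np.linalg.norm(w[g]) for g, wt in zip(node_groups, weights))

    w = np.array([0.5, -1.0, 0.0, 2.0])
    node_groups = [[0], [1], [2], [3],   # leaves
                   [0, 1], [2, 3],       # internal nodes
                   [0, 1, 2, 3]]         # root
    weights = [1.0] * len(node_groups)
    print(tree_group_penalty(w, node_groups, weights))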

On the ℓ1-ℓq Regularized Regression

In this paper we consider the problem of grouped variable selection in high-dimensional regression using ℓ1-ℓq regularization (1 ≤ q ≤ ∞), which can be viewed as a natural generalization of the ℓ1-ℓ2 regularization (the group Lasso). The key condition is that the dimensionality pn can increase much faster than the sample size n, i.e. pn ≫ n (in our case pn is the number of groups), but the numb...
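For the group-Lasso case q = 2, the proximal operator of the group penalty has a simple closed form, block soft-thresholding, which many solvers iterate group by group; a minimal sketch of that standard rule:

    import numpy as np

    # Block soft-thresholding: the proximal operator of lam * ||w_g||_2.
    # A whole group is zeroed out when its norm falls below lam, which
    # is exactly how group-level sparsity arises for q = 2.

    def group_soft_threshold(w_g, lam):
        norm = np.linalg.norm(w_g)
        if norm <= lam:
            return np.zeros_like(w_g)
        return (1.0 - lam / norm) * w_g

    print(group_soft_threshold(np.array([3.0, 4.0]), lam=2.0))  # norm 5 -> 3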

Sufficient Conditions for Generating Group Level Sparsity in a Robust Minimax Framework

Regularization techniques have become a principled tool for statistics and machine learning research and practice. However, in most situations, these regularization terms are not well interpreted, especially as to how they relate to the loss function and the data. In this paper, we propose a robust minimax framework to interpret the relationship between data and regularization terms for a large cla...

Regularization Parameter Selection in the Group Lasso

This article discusses the problem of choosing a regularization parameter in the group Lasso proposed by Yuan and Lin (2006), an ℓ1-regularization approach for producing a block-wise sparse model that has attracted a lot of interest in statistics, machine learning, and data mining. It is important to choose an appropriate regularization parameter from a set of candidate values, because it...
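A common baseline for this choice is K-fold cross-validation over a candidate grid. The sketch below uses scikit-learn's plain Lasso as a stand-in for a group-Lasso solver (a simplifying assumption; the loop is the same for any estimator exposing a regularization parameter):

    import numpy as np
    from sklearn.linear_model import Lasso          # stand-in for a group-Lasso solver
    from sklearn.model_selection import KFold

    # Pick the regularization parameter by 5-fold cross-validation
    # over a log-spaced grid, using held-out mean squared error.

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 20))
    y = X[:, :3] @ np.array([1.5, -2.0, 1.0]) + 0.1 * rng.standard_normal(100)

    grid = np.logspace(-3, 0, 10)
    cv = KFold(n_splits=5, shuffle=True, random_state=0)

    def cv_error(lam):
        errs = []
        for train, test in cv.split(X):
            model = Lasso(alpha=lam).fit(X[train], y[train])
            errs.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
        return np.mean(errs)

    best = min(grid, key=cv_error)
    print("selected lambda:", best)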

Publication date: 2010